Achieving limiting distributions for Markov chains using back buttons
Authors
Abstract
As a simple model for browsing the World Wide Web, we consider Markov chains with the option of moving “back” to the previous state. We develop an algorithm which uses back buttons to achieve essentially any limiting distribution on the state space. This corresponds to spending the desired total fraction of time at each web page. On finite state spaces, our algorithm always succeeds. On infinite state spaces the situation is more complicated, and is related to both the tail behaviour of the distributions, and the properties of convolution equations.
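The back-button model described above can be illustrated with a small simulation. This is a minimal sketch under stated assumptions: the three-state transition matrix `P` and the per-state back probabilities `b` are made-up example values, and the paper's actual algorithm for choosing back probabilities to achieve a *target* limiting distribution is not reproduced here; the code only shows the mechanics of a Markov chain with a "back" option and measures the empirical fraction of time spent at each state.

```python
import random

# Illustrative 3-state "web surfing" chain. P and b are invented
# example values, not parameters from the paper.
P = {
    0: [(0, 0.1), (1, 0.6), (2, 0.3)],
    1: [(0, 0.5), (1, 0.2), (2, 0.3)],
    2: [(0, 0.4), (1, 0.4), (2, 0.2)],
}
b = {0: 0.0, 1: 0.3, 2: 0.6}  # probability of pressing "back" at each state


def step(state, history, rng):
    """One move: press back with probability b[state], else take a Markov step."""
    if history and rng.random() < b[state]:
        return history.pop(), history      # "back": return to the previous page
    history.append(state)                  # remember this page, then move forward
    r, acc = rng.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt, history
    return P[state][-1][0], history        # guard against rounding in P's rows


def occupancy(n_steps=200_000, seed=0):
    """Empirical fraction of time spent at each state over n_steps moves."""
    rng = random.Random(seed)
    state, history = 0, []
    counts = {s: 0 for s in P}
    for _ in range(n_steps):
        counts[state] += 1
        state, history = step(state, history, rng)
    return {s: c / n_steps for s, c in counts.items()}


if __name__ == "__main__":
    print(occupancy())  # empirical fraction of time at each "page"
```

In this sketch, tuning `b` shifts occupancy toward states whose successors press back often; the paper's contribution is a principled rule for choosing such probabilities so that the occupancy matches essentially any prescribed distribution on a finite state space.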
Similar resources
Stochastic bounds for a single server queue with general retrial times
We propose a mathematical method based on stochastic comparisons of Markov chains to derive bounds on performance indices. The main goal of this paper is to investigate various monotonicity properties of a single-server retrial queue with a first-come-first-served (FCFS) orbit and general retrial times, using stochastic ordering techniques.
Approximations of Quasistationary Distributions for Markov Chains
We consider a simple and widely used method for evaluating quasistationary distributions of continuous time Markov chains. The infinite state space is replaced by a large, but finite approximation, which is used to evaluate a candidate distribution. We give some conditions under which the method works, and describe some important pitfalls. μ-subinvariant measures; conditioned processes; limitin...
Sequentially Interacting Markov Chain Monte Carlo Methods
We introduce a novel methodology for sampling from a sequence of probability distributions of increasing dimension and estimating their normalizing constants. These problems are usually addressed using Sequential Monte Carlo (SMC) methods. The alternative Sequentially Interacting Markov Chain Monte Carlo (SIMCMC) scheme proposed here works by generating interacting non-Markovian sequences which...
The Rate of Rényi Entropy for Irreducible Markov Chains
In this paper, we obtain the Rényi entropy rate for irreducible-aperiodic Markov chains with countable state space, using the theory of countable nonnegative matrices. We also obtain the bound for the rate of Rényi entropy of an irreducible Markov chain. Finally, we show that the bound for the Rényi entropy rate is the Shannon entropy rate.
Estimation of the Transition Distributions of a Markov Renewal Process
The present paper is concerned with the estimation of the transition distributions of a Markov renewal process with finitely many states, which extends and unifies some aspects of the results in the special cases of discrete- and continuous-parameter Markov chains. A natural estimator of the transition distributions is defined and shown to be consistent. Limiting distributions of this estimator...
Journal: Statistics and Computing
Volume 14
Pages: -
Publication date: 2004